177 research outputs found

    Death and Lightness: Using a Demographic Model to Find Support Verbs

    Some verbs have a particular kind of binary ambiguity: they can carry their normal, full meaning, or they can merely act as a prop for the nominal object. It has been suggested that there is a detectable pattern in the relationship between a verb acting as a prop (a "support verb") and the noun it supports. The task this paper undertakes is to develop a model which identifies the support verb for a particular noun and, by extension, when nouns are enumerated, a model which disambiguates a verb with respect to its support status. The paper sets up a basic model as a standard for comparison; it then proposes a more complex model and gives some results to support the model's validity, comparing it with other similar approaches.
    Comment: LaTeX, 8 pages, uses aclap.st
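    As a rough illustration of the task (and not the paper's demographic model), the sketch below ranks candidate support verbs for a noun by a simple verb-object association score over toy co-occurrence counts. The PMI scorer, the function names, and the data are invented for this example.

```python
from collections import Counter
from math import log

# Toy verb-object co-occurrence counts; in practice these would be
# extracted from a parsed corpus (hypothetical example data).
pairs = [
    ("take", "decision"), ("make", "decision"), ("make", "decision"),
    ("take", "walk"), ("take", "walk"), ("eat", "walk"),
    ("eat", "lunch"), ("eat", "lunch"),
    ("give", "talk"), ("give", "talk"), ("attend", "talk"),
]

pair_counts = Counter(pairs)
verb_counts = Counter(v for v, _ in pairs)
noun_counts = Counter(n for _, n in pairs)
total = len(pairs)

def pmi(verb, noun):
    """Pointwise mutual information between a verb and its object noun."""
    joint = pair_counts[(verb, noun)] / total
    if joint == 0:
        return float("-inf")
    return log(joint / ((verb_counts[verb] / total) * (noun_counts[noun] / total)))

def support_verb_candidate(noun):
    """Return the verb most strongly associated with `noun` as its object."""
    candidates = {v for (v, n) in pair_counts if n == noun}
    return max(candidates, key=lambda v: pmi(v, noun))

print(support_verb_candidate("decision"))  # "make" on this toy data
print(support_verb_candidate("walk"))      # "take" on this toy data
```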

    A Novel Neural Network Model for Joint POS Tagging and Graph-based Dependency Parsing

    We present a novel neural network model that learns POS tagging and graph-based dependency parsing jointly. Our model uses bidirectional LSTMs to learn feature representations shared between the POS tagging and dependency parsing tasks, thus handling the feature-engineering problem. Our extensive experiments on 19 languages from the Universal Dependencies project show that our model outperforms the state-of-the-art neural network-based Stack-propagation model for joint POS tagging and transition-based dependency parsing, resulting in a new state of the art. Our code is open-source and available together with pre-trained models at: https://github.com/datquocnguyen/jPTDP
    Comment: v2: also includes universal POS tagging, UAS and LAS accuracies w.r.t. gold-standard segmentation on Universal Dependencies 2.0 - CoNLL 2017 shared task test data; in CoNLL 2017
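    The authors' implementation is available at the URL above. Purely for illustration, here is a minimal PyTorch sketch of the shared-encoder idea described in the abstract: one BiLSTM feeds both a POS-tagging head and a graph-based arc-scoring head. The class name, layer sizes, and the bilinear arc scorer are illustrative assumptions, not the jPTDP implementation.

```python
import torch
import torch.nn as nn

class JointTaggerParser(nn.Module):
    """Sketch of a joint model: a shared BiLSTM encoder feeds both a
    POS-tagging head and a graph-based (arc-scoring) parsing head.
    Hyperparameters and layer sizes are illustrative only."""

    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden_dim=128, arc_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # POS-tagging head: one label per token.
        self.tag_out = nn.Linear(2 * hidden_dim, n_tags)
        # Parsing head: score every (head, dependent) token pair.
        self.head_mlp = nn.Linear(2 * hidden_dim, arc_dim)
        self.dep_mlp = nn.Linear(2 * hidden_dim, arc_dim)
        self.arc_bilinear = nn.Bilinear(arc_dim, arc_dim, 1)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len)
        h, _ = self.encoder(self.embed(word_ids))   # (B, T, 2H) shared features
        tag_logits = self.tag_out(h)                # (B, T, n_tags)
        heads = torch.relu(self.head_mlp(h))        # (B, T, A)
        deps = torch.relu(self.dep_mlp(h))          # (B, T, A)
        B, T, A = heads.shape
        # arc_scores[b, i, j] = score of token j being the head of token i.
        arc_scores = self.arc_bilinear(
            deps.unsqueeze(2).expand(B, T, T, A).reshape(-1, A),
            heads.unsqueeze(1).expand(B, T, T, A).reshape(-1, A),
        ).view(B, T, T)
        return tag_logits, arc_scores

model = JointTaggerParser(vocab_size=1000, n_tags=17)
tags, arcs = model(torch.randint(0, 1000, (2, 6)))
print(tags.shape, arcs.shape)  # torch.Size([2, 6, 17]) torch.Size([2, 6, 6])
```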

    Active learning and the Irish treebank

    We report on our ongoing work in developing the Irish Dependency Treebank, describe the results of two inter-annotator agreement (IAA) studies, demonstrate improvements in annotation consistency which have a knock-on effect on parsing accuracy, and present the final set of dependency labels. We then investigate the extent to which active learning can play a role in treebank and parser development by comparing an active learning bootstrapping approach to a passive approach in which sentences are chosen at random for manual revision. We show that active learning outperforms passive learning, but when annotation effort is taken into account, it is not clear how much of an advantage the active learning approach has. Finally, we present results which suggest that adding automatic parses to the training data along with manually revised parses in an active learning setup does not greatly affect parsing accuracy.
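    To make the active-versus-passive contrast concrete, here is a minimal sketch of the selection step, assuming uncertainty sampling (least-confident sentences first) as the active strategy. The confidence function, the example sentences, and the function name are placeholders, not the authors' actual setup.

```python
import random

def select_for_annotation(pool, parser_confidence, k, strategy="active"):
    """Pick k sentences from the unlabelled pool for manual revision.

    strategy="active": choose the sentences the parser is least confident on
    (uncertainty sampling). strategy="passive": choose at random, which is
    the baseline the abstract compares against. `parser_confidence` is any
    callable mapping a sentence to a score; here it stands in for a real
    parser's confidence estimate.
    """
    if strategy == "passive":
        return random.sample(pool, k)
    return sorted(pool, key=parser_confidence)[:k]

# Toy demonstration with a fake confidence measure (shorter sentences are
# treated as "easier"); a real setup would use scores from the current parser.
pool = ["Tá sé ag cur báistí", "Chonaic mé an madra",
        "Is maith liom tae agus arán", "Tá an leabhar ar an mbord"]
fake_confidence = lambda s: 1.0 / len(s.split())

batch = select_for_annotation(pool, fake_confidence, k=2)
# The selected batch would be manually revised, added to the treebank, and
# the parser retrained before the next selection round (bootstrapping).
print(batch)
```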

    Tree edit distance as a baseline approach for paraphrase representation

    Finding an adequate paraphrase representation formalism is a challenging issue in Natural Language Processing. In this paper, we analyse the performance of Tree Edit Distance as a paraphrase representation baseline. Our experiments using the Edit Distance Textual Entailment Suite show that, because Tree Edit Distance is a purely syntactic approach, paraphrase alternations not based on structural reorganizations do not find an adequate representation. They also show that there is much scope for better modelling of the way trees are aligned.
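    For concreteness, a minimal sketch of such a baseline is below, using the third-party zss package (an implementation of the Zhang-Shasha tree edit distance) on two toy parse trees with an arbitrary decision threshold. The trees, the helper, and the threshold are illustrative assumptions, not the paper's experimental setup. As the abstract notes, a meaning-preserving alternation such as passivisation still yields a large distance under this purely syntactic view.

```python
# Tree-edit-distance paraphrase baseline sketch; requires `pip install zss`.
from zss import Node, simple_distance

def tree(label, *children):
    """Build a zss Node with the given label and child subtrees."""
    node = Node(label)
    for child in children:
        node.addkid(child)
    return node

# Toy constituency-like trees for "the cat ate the fish" and a passive variant.
t1 = tree("S",
          tree("NP", tree("the"), tree("cat")),
          tree("VP", tree("ate"),
               tree("NP", tree("the"), tree("fish"))))
t2 = tree("S",
          tree("NP", tree("the"), tree("fish")),
          tree("VP", tree("was"), tree("eaten"),
               tree("PP", tree("by"),
                    tree("NP", tree("the"), tree("cat")))))

dist = simple_distance(t1, t2)

# A purely syntactic criterion: call the pair a paraphrase if the trees are
# close enough. Alternations like passivisation inflate the distance even
# though the meaning is preserved, which is the limitation the paper points out.
THRESHOLD = 6  # arbitrary, for illustration only
print(dist, "paraphrase" if dist <= THRESHOLD else "not paraphrase")
```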